Interpretable Auto Window Setting for Deep-Learning-Based CT Analysis

Zhang, Yiqin, Chen, Meiling, Zhang, Zhengjie

arXiv.org Artificial Intelligence

Whether during its early days of popularization or in the present, window setting has always been an indispensable part of the Computed Tomography (CT) analysis process. Although research has investigated the capability of CT multi-window fusion to enhance neural networks, there remains a paucity of domain-invariant, intuitively interpretable methodologies for Auto Window Setting. In this work, we propose a plug-and-play module derived from the Tanh activation function that is compatible with mainstream deep-learning architectures. Starting from the physical principles of CT, we adhere to the principle of interpretability to ensure the module's reliability for medical applications. The domain-invariant design makes the preference decisions rendered by the adaptive mechanism observable from a clinically intuitive perspective, so the proposed method can be understood by neural-network experts and more readily trusted by clinicians. We confirm the effectiveness of the proposed method on multiple open-source datasets, yielding 10%–200% Dice improvements on hard segmentation targets.
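The contrast between classic hard windowing and a Tanh-based soft window can be sketched in a few lines. This is a minimal illustration of the general idea only; the function names, the slope factor of 4, and the exact parameterization are assumptions for the sketch, not necessarily what the paper uses.

```python
import numpy as np

def hard_window(hu, center, width):
    """Classic CT windowing: clip HU values to [center - width/2,
    center + width/2], then rescale linearly to [0, 1]."""
    lo, hi = center - width / 2, center + width / 2
    return (np.clip(hu, lo, hi) - lo) / (hi - lo)

def tanh_window(hu, center, width):
    """Soft windowing via tanh: a smooth, differentiable analogue of the
    hard window, so `center` and `width` could in principle be learned
    end-to-end (illustrative parameterization)."""
    return 0.5 * (np.tanh(4.0 * (hu - center) / width) + 1.0)

hu = np.array([-1000.0, 40.0, 1000.0])  # air, soft tissue, dense bone (HU)
soft = tanh_window(hu, center=40.0, width=400.0)
hard = hard_window(hu, center=40.0, width=400.0)
```

Unlike `np.clip`, the tanh mapping never has exactly zero gradient outside the window, which is what makes a learned, adaptive window setting possible.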


AI prediction of cardiovascular events using opportunistic epicardial adipose tissue assessments from CT calcium score

Hu, Tao, Freeze, Joshua, Singh, Prerna, Kim, Justin, Song, Yingnan, Wu, Hao, Lee, Juhwan, Al-Kindi, Sadeer, Rajagopalan, Sanjay, Wilson, David L., Hoori, Ammar

arXiv.org Artificial Intelligence

Department of Radiology, Case Western Reserve University, Cleveland, OH, 44106, USA

Abstract. Background: Recent studies have used basic epicardial adipose tissue (EAT) assessments (e.g., volume and mean HU) to predict the risk of atherosclerosis-related major adverse cardiovascular events (MACE). Objectives: Create novel, hand-crafted EAT features, "fat-omics", to capture the pathophysiology of EAT and improve MACE prediction. We extracted 148 radiomic features (morphological, spatial, and intensity) and used Cox elastic-net for feature reduction and prediction of MACE. Results: Traditional fat features gave marginal prediction (EAT volume, EAT mean HU, and BMI gave C-indices of 0.53, 0.55, and 0.57, respectively). Significant improvement was obtained with 15 fat-omics features (C-index = 0.69). Other high-risk features include kurtosis-of-EAT-thickness, reflecting the heterogeneity of thicknesses, and EAT-volume-in-the-top-25%-of-the-heart, emphasizing adipose near the proximal coronary arteries. Kaplan-Meier plots of Cox-identified high- and low-risk patients, split at the median fat-omics risk, were well separated, with the high-risk group having a hazard ratio 2.4 times that of the low-risk group (P < 0.001). Conclusion: Preliminary findings indicate an opportunity to use more finely tuned, explainable assessments of EAT for improved cardiovascular risk prediction.

Introduction: Cardiovascular disease is a major cause of morbidity and mortality worldwide (1), leading to 17.9 million deaths globally each year (2). Numerous risk-score methodologies have been developed to predict risk from cardiovascular disease, but these methods often lack sufficient discrimination (3). Accurate, explainable risk prediction models will provide useful information to patients and physicians for more personalized medications and interventions. Previous studies have established the usefulness of the coronary calcification Agatston score, as obtained from CT calcium score (CTCS) images, for cardiovascular risk prediction.
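The C-index reported above (Harrell's concordance index) measures how well predicted risk ranks patients by time-to-event: 0.5 is chance, 1.0 is perfect ranking. A minimal sketch of its computation for right-censored data, assuming the standard definition (the paper's exact evaluation pipeline is not specified here):

```python
import numpy as np

def c_index(risk, time, event):
    """Harrell's concordance index. A pair (i, j) is comparable when
    subject i has an observed event and time[i] < time[j]; the pair is
    concordant when the model assigns i the higher risk. Ties in risk
    count as half. O(n^2) sketch for clarity, not speed."""
    concordant = comparable = 0.0
    for i in range(len(risk)):
        if not event[i]:
            continue  # censored subjects cannot anchor a comparable pair
        for j in range(len(risk)):
            if time[i] < time[j]:
                comparable += 1
                if risk[i] > risk[j]:
                    concordant += 1
                elif risk[i] == risk[j]:
                    concordant += 0.5
    return concordant / comparable

# Perfectly ordered risks (earlier event -> higher risk) give C-index 1.0
time  = np.array([2.0, 4.0, 6.0, 8.0])
event = np.array([1, 1, 1, 0])  # last subject censored
risk  = np.array([4.0, 3.0, 2.0, 1.0])
```

A C-index of 0.69 for the fat-omics model, versus 0.53-0.57 for single traditional features, therefore means substantially better risk ranking across comparable patient pairs.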


Improved CT-based Osteoporosis Assessment with a Fully Automated Deep Learning Tool

#artificialintelligence

"Just Accepted" papers have undergone full peer review and have been accepted for publication in Radiology: Artificial Intelligence. This article will undergo copyediting, layout, and proof review before it is published in its final version. Please note that during production of the final copyedited article, errors may be discovered which could affect the content. The aim was to develop, test, and validate a deep learning (DL) tool that improves upon a previous feature-based CT image-processing algorithm for bone mineral density (BMD) assessment, and to compare it against the manual reference standard. This single-center, retrospective study used manual L1 trabecular Hounsfield unit (HU) measurements from abdominal CT scans of 11,035 patients (mean age, 58 years [SD, 12]; 6311 women) as the reference standard.
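The reference standard here is a simple quantity: the mean HU over a trabecular region of interest (ROI) in the L1 vertebral body. A toy sketch of that measurement, with made-up voxel values and a hypothetical `mean_l1_hu` helper (the study's actual ROI placement and preprocessing are not described here):

```python
import numpy as np

def mean_l1_hu(ct_volume, roi_mask):
    """Mean trabecular attenuation (HU) inside an L1 region of interest.
    `ct_volume` holds HU values; `roi_mask` is a boolean array of the
    same shape marking the trabecular ROI."""
    return float(ct_volume[roi_mask].mean())

# Toy "scan": air background at -1000 HU with a single ROI voxel at 150 HU
vol = np.full((3, 3, 3), -1000.0)
mask = np.zeros(vol.shape, dtype=bool)
mask[1, 1, 1] = True
vol[1, 1, 1] = 150.0
bmd_hu = mean_l1_hu(vol, mask)
```

A DL tool that reproduces this measurement automatically removes the manual ROI-placement step while being validated against exactly this kind of reference value.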


Finding and Following of Honeycombing Regions in Computed Tomography Lung Images by Deep Learning

Eğriboz, Emre, Kaynar, Furkan, Albayrak, Songül Varli, Müsellim, Benan, Selçuk, Tuba

arXiv.org Machine Learning

In recent years, alongside conventional treatment, Computer Aided Diagnosis (CAD) systems, which can support the physician's decision-making and detect disease at an early stage, have come into frequent use. Diagnosing Idiopathic Pulmonary Fibrosis (IPF) with CAD systems is particularly valuable because it allows doctors and radiologists to follow the disease over time. The development of high-resolution computed imaging scanners and growing computational power have made such diagnosis and follow-up feasible. The purpose of this project is to design a tool that helps specialists diagnose and follow up IPF by identifying honeycombing and ground-glass patterns in High Resolution Computed Tomography (HRCT) lung images. The main goals of this work are a program module that segments the lung pair and a self-learning deep-learning model trained on Computed Tomography (CT) images of diseased regions annotated by doctors. Through the created model, the program module can find these specific regions in new CT images. In this study, lung-segmentation performance was evaluated with the Sørensen-Dice coefficient, yielding a mean of 90.7%; the model was tested on data not used during training of the CNN, with average performance of 87.8% for healthy regions, 73.3% for ground-glass areas, and 69.1% for honeycombing zones.
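The Sørensen-Dice coefficient used to score segmentation in the abstracts above is 2|A ∩ B| / (|A| + |B|) for two binary masks A and B. A minimal sketch of that metric (the evaluation code in the papers themselves is not shown here):

```python
import numpy as np

def dice(pred, target):
    """Sørensen-Dice coefficient between two binary masks:
    2 * |intersection| / (|pred| + |target|). Returns 1.0 when both
    masks are empty, a common convention."""
    pred = pred.astype(bool)
    target = target.astype(bool)
    inter = np.logical_and(pred, target).sum()
    total = pred.sum() + target.sum()
    return 2.0 * inter / total if total else 1.0

a = np.array([[1, 1, 0],
              [0, 1, 0]])
b = np.array([[1, 0, 0],
              [0, 1, 1]])
score = dice(a, b)  # overlap of 2 voxels out of 3 + 3 -> 2/3
```

Because Dice weights overlap relative to the combined mask size, small, scattered targets such as honeycombing zones score lower than large smooth ones for the same boundary error, which is consistent with the 69.1% figure for honeycombing versus 90.7% for whole-lung segmentation.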